Reinforcement Learning Based Adaptive Sampling: REAPing Rewards by Exploring Protein Conformational Landscapes
Authors
Abstract
Similar Resources
REinforcement learning based Adaptive samPling: REAPing Rewards by Exploring Protein Conformational Landscapes
One of the key limitations of Molecular Dynamics simulations is the computational intractability of sampling protein conformational landscapes with either large system size or long timescales. To overcome this bottleneck, we present the REinforcement learning based Adaptive samPling (REAP) algorithm that aims to sample a landscape faster than conventional simulation methods by identifying react...
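The abstract is truncated here, but it describes rewarding simulations for exploring along weighted reaction coordinates. The sketch below is a minimal, hypothetical illustration of one adaptive-sampling round in the spirit of REAP; the function name, scoring rule, and arguments are assumptions rather than the authors' code. Frames already projected onto candidate order parameters are scored by how far they sit from the explored region, and the top-scoring frames seed the next batch of short simulations.

```python
import numpy as np

def reap_round(projected_frames, weights, n_restarts=5):
    """One adaptive-sampling round in the spirit of REAP (illustrative sketch,
    not the authors' implementation). Frames are scored by their weighted,
    normalized distance from the mean of what has been sampled so far; the
    highest-scoring frames seed the next batch of short simulations."""
    mu = projected_frames.mean(axis=0)                 # centre of the explored region
    sigma = projected_frames.std(axis=0) + 1e-8        # spread along each order parameter
    rewards = (np.abs(projected_frames - mu) / sigma) @ weights
    seed_indices = np.argsort(rewards)[-n_restarts:]   # frames farthest from explored space
    return seed_indices, rewards
```

In a complete loop the order-parameter weights would themselves be updated between rounds to favor directions that keep yielding new conformations; that step is omitted from this sketch.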
Reinforcement Learning Without Rewards
Machine learning can be broadly defined as the study and design of algorithms that improve with experience. Reinforcement learning is a variety of machine learning that makes minimal assumptions about the information available for learning, and, in a sense, defines the problem of learning in the broadest possible terms. Reinforcement learning algorithms are usually applied to “interactive” prob...
Learning Shaping Rewards in Model-based Reinforcement Learning
Potential-based reward shaping has been shown to be a powerful method to improve the convergence rate of reinforcement learning agents. It is a flexible technique to incorporate background knowledge into temporal-difference learning in a principled way. However, the question remains how to compute the potential which is used to shape the reward that is given to the learning agent. In this paper...
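For context, potential-based shaping adds a term of the form F(s, s') = γΦ(s') − Φ(s) to the environment reward (the standard Ng, Harada, and Russell formulation). The snippet below is a minimal tabular Q-learning sketch showing where such a term enters a temporal-difference update; the potential Φ, whose computation is the question the paper addresses, is left as a user-supplied function, and all names here are illustrative.

```python
import numpy as np

def shaped_q_update(Q, s, a, r, s_next, phi, gamma=0.99, alpha=0.1):
    """Tabular Q-learning step with a potential-based shaping term
    F(s, s') = gamma * phi(s_next) - phi(s) added to the raw reward.
    `phi` is a user-supplied potential function over states."""
    shaped_r = r + gamma * phi(s_next) - phi(s)       # shaped reward
    td_target = shaped_r + gamma * np.max(Q[s_next])  # one-step TD target
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```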
Adaptive conformational sampling based on replicas
Computer simulations of biomolecules such as molecular dynamics simulations are limited by the time scale of conformational rearrangements. Several sampling techniques are available to search the multi-minima free energy landscape, but the most efficient, time-dependent methods generally do not produce a canonical ensemble. A sampling algorithm based on a self-regulating ladder of searching copies i...
Variance-Based Rewards for Approximate Bayesian Reinforcement Learning
The explore–exploit dilemma is one of the central challenges in Reinforcement Learning (RL). Bayesian RL solves the dilemma by providing the agent with information in the form of a prior distribution over environments; however, full Bayesian planning is intractable. Planning with the mean MDP is a common myopic approximation of Bayesian planning. We derive a novel reward bonus that is a functio...
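The abstract is truncated before the bonus itself is given, so the snippet below is only a generic illustration of a variance-based exploration bonus, not the bonus derived in the paper: the observed reward is inflated in proportion to the spread of sampled value estimates, so poorly understood state-actions look more attractive to the agent.

```python
import numpy as np

def variance_bonus(r, q_samples, beta=1.0):
    """Generic variance-based exploration bonus (illustrative only, not the
    paper's derived quantity): add beta times the standard deviation of
    posterior Q-value samples to the observed reward."""
    return r + beta * float(np.std(q_samples))
```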
Journal
Journal title: The Journal of Physical Chemistry B
Year: 2018
ISSN: 1520-6106, 1520-5207
DOI: 10.1021/acs.jpcb.8b06521